54 research outputs found

    Data Access for LIGO on the OSG

    Full text link
    During 2015 and 2016, the Laser Interferometer Gravitational-Wave Observatory (LIGO) conducted a three-month observing campaign. These observations delivered the first direct detection of gravitational waves from binary black hole mergers. To search for these signals, the LIGO Scientific Collaboration uses the PyCBC search pipeline. To deliver science results in a timely manner, LIGO collaborated with the Open Science Grid (OSG) to distribute the required computation across a series of dedicated, opportunistic, and allocated resources. To deliver the petabytes necessary for such a large-scale computation, our team deployed a distributed data access infrastructure based on the XRootD server suite and the CernVM File System (CVMFS). This data access strategy grew from simply accessing remote storage to a POSIX-based interface underpinned by distributed, secure caches across the OSG. Comment: 6 pages, 3 figures, submitted to PEARC17
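
    The sketch below illustrates the POSIX-based access pattern described above: read a file through the CVMFS mount when it is present, and otherwise stage it with the standard XRootD xrdcp tool. The repository path, endpoint URL, and file name are placeholder assumptions, not actual LIGO/OSG values.

        import os
        import shutil
        import subprocess

        # Placeholder locations; the abstract does not name the real
        # CVMFS repository or XRootD endpoint.
        CVMFS_PATH = "/cvmfs/ligo.example.org/frames/O1/H-H1_HOFT.gwf"
        XROOTD_URL = "root://xrootd.example.org//frames/O1/H-H1_HOFT.gwf"

        def fetch_frame(local_copy="frame.gwf"):
            """Read a frame file via the POSIX CVMFS mount if available,
            otherwise stage it from remote storage with xrdcp."""
            if os.path.exists(CVMFS_PATH):
                # CVMFS exposes cached remote data as ordinary files,
                # so standard POSIX I/O suffices.
                shutil.copy(CVMFS_PATH, local_copy)
            else:
                # Fall back to direct remote access with the XRootD client tools.
                subprocess.run(["xrdcp", "--silent", XROOTD_URL, local_copy],
                               check=True)
            return local_copy

        if __name__ == "__main__":
            print("staged:", fetch_frame())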

    Nature and culture in Latin America

    Get PDF
    The holding of the XVIII Latin American Forum of Anthropology and Archaeology Students, "Culture and Nature in Latin America: Scenarios for a Non-Civilizational Development Model", which took place in Quito from 17 to 23 July 2011, was a highly significant event for Latin American anthropology, for two reasons. First, because it coincided with the emergence of the Latin American university student movement, which was voicing its tendencies, proposals, and demands for change both in academic practices and in the civilizational patterns that govern present-day relations. Second, because it took place in a context of consolidation of the new democracies of the Andean countries, anti-neoliberal in character and grounded in subjects of rights, among which nature is included. These contexts meant that the Forum did not stage theoretical or methodological certainties, nor lend itself to a sterile exhibition of disciplinary advances. Rather, the call to anthropology and archaeology was merely a pretext to speak, in their language, about ourselves: what we are, what we think, and what we aspire to and feel about our Latin America. What we saw, heard, and shared were, in truth, not only ideas and concepts but choices and stands taken in the face of multiple crossroads: positions on situations that threaten life, justice, and the rights of all, an epistemological challenge still in the making that has yet to take root in our academic practices.

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Get PDF
    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable by more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.
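
    As a minimal sketch of the phase 3 ranking logic described above, the snippet below computes the share of respondents who rate each intervention acceptable and sorts interventions accordingly. The vote data are invented for illustration and are not the study's figures.

        from collections import Counter

        # Invented responses: intervention -> list of "yes"/"no" acceptability votes.
        responses = {
            "introduce recycling": ["yes", "yes", "yes", "no"],
            "reduce general anaesthesia": ["yes", "no", "yes", "no"],
            "reusable surgical devices": ["yes", "yes", "yes", "yes"],
        }

        def acceptability(votes):
            """Percentage of respondents rating the intervention acceptable."""
            return 100.0 * Counter(votes)["yes"] / len(votes)

        # Rank interventions from most to least acceptable, as in phase 4.
        for name in sorted(responses, key=lambda k: acceptability(responses[k]),
                           reverse=True):
            print(f"{name}: {acceptability(responses[name]):.0f}% acceptable")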

    A new era for central processing and production in CMS

    No full text
    The goal for CMS computing is to maximise the throughput of simulated event generation while also processing real data events as quickly and reliably as possible. To maintain this as the quantity of events increases, since the beginning of 2011 CMS computing has been migrating at the Tier 1 level from its old production framework, ProdAgent, to a new one, WMAgent. The WMAgent framework offers improved processing efficiency and better resource utilisation, as well as a reduction in required manpower. In addition to the challenges encountered during the design of the WMAgent framework, several operational issues arose during its commissioning. The largest operational challenges concerned the usage and monitoring of resources, mainly as a result of a change in the way work is allocated: instead of work being assigned to operators, all work is centrally injected and managed in the Request Manager system, and the task of the operators has changed from running individual workflows to monitoring the global workload. In this report we present how we tackled some of these operational challenges and how we benefitted from the lessons learned in the commissioning of the WMAgent framework at the Tier 2 level in late 2011. As case studies, we show how the WMAgent system performed during some of the large data reprocessing and Monte Carlo simulation campaigns.
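
    A minimal sketch of the central-injection model described above, assuming a hypothetical REST endpoint and request fields: all work is posted to a Request Manager-style service instead of being assigned to individual operators. The URL and payload field names are illustrative and do not reflect the actual Request Manager API.

        import json
        import urllib.request

        # Hypothetical endpoint; the abstract does not describe the real API.
        REQMGR_URL = "https://reqmgr.example.cern.ch/requests"

        # Illustrative request specification for a centrally injected workflow.
        request_spec = {
            "RequestType": "MonteCarlo",
            "Campaign": "Summer11",
            "Events": 1000000,
        }

        req = urllib.request.Request(
            REQMGR_URL,
            data=json.dumps(request_spec).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print("request injected, HTTP status", resp.status)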

    Moving the California distributed CMS xcache from bare metal into containers using Kubernetes

    No full text
    The University of California system has excellent networking between all of its campuses and a number of other universities in California, including Caltech, most of them connected at 100 Gbps. UCSD and Caltech have therefore joined their disk systems into a single logical xcache system, with worker nodes from both sites accessing data from disks at either site. This setup has been in place for a couple of years now and has proven to work very well. However, coherently managing nodes at multiple physical locations has not been trivial, and we have been looking for ways to improve operations. With the Pacific Research Platform (PRP) now providing a Kubernetes resource pool spanning resources in the science DMZs of all the UC campuses, we have recently migrated the xcache services from bare-metal hosts into containers. This talk presents our experience in both migrating to and operating in the new environment.
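
    As a hedged sketch of the migration target, the snippet below uses the official Kubernetes Python client to declare an xcache Deployment of the kind such a resource pool could host. The container image, namespace, and labels are assumptions, not the actual PRP configuration.

        from kubernetes import client, config

        # Load credentials from the local kubeconfig (e.g. for the PRP pool).
        config.load_kube_config()

        container = client.V1Container(
            name="xcache",
            image="opensciencegrid/xcache:latest",  # assumed image name
            ports=[client.V1ContainerPort(container_port=1094)],  # XRootD default
        )

        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="xcache"),
            spec=client.V1DeploymentSpec(
                replicas=1,
                selector=client.V1LabelSelector(match_labels={"app": "xcache"}),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels={"app": "xcache"}),
                    spec=client.V1PodSpec(containers=[container]),
                ),
            ),
        )

        # Create the Deployment in an assumed namespace.
        client.AppsV1Api().create_namespaced_deployment(namespace="osg",
                                                        body=deployment)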

    Improving WLCG networks through monitoring and analytics

    No full text
    WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. The OSG Networking Area, in partnership with WLCG, has focused on collecting, storing, and making available all network-related metrics for further analysis and discovery of issues that might impact network performance and operations. To help sites and experiments better understand and fix networking issues, the WLCG Network Throughput working group was formed; it analyses and integrates the network-related monitoring data collected by the OSG/WLCG infrastructure and operates a support unit that helps find and fix network performance issues. This paper describes the current state of the OSG network measurement platform and summarises the activities of the working group, including updates on recently developed higher-level services, network performance incidents investigated, and past and present analytical activities related to networking, together with their results.
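
    To make the measurement platform concrete, the sketch below queries a perfSONAR-style measurement archive (an esmond REST interface) for recent throughput records. The host name is a placeholder and the JSON field names are assumptions about the archive's metadata format.

        import json
        import urllib.request

        # Placeholder archive host; real deployments expose an esmond-style API.
        ARCHIVE = "https://perfsonar.example.org/esmond/perfsonar/archive/"
        QUERY = "?event-type=throughput&time-range=86400"  # last 24 hours

        with urllib.request.urlopen(ARCHIVE + QUERY) as resp:
            measurements = json.load(resp)

        # List the measured source/destination pairs (assumed field names).
        for m in measurements:
            print(m.get("source"), "->", m.get("destination"))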

    WLCG Networks: Update on Monitoring and Analytics

    No full text
    WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion, and traffic routing. The OSG Networking Area, in partnership with WLCG, is focused on being the primary source of networking information for its partners and constituents. It was established to ensure sites and experiments can better understand and fix networking issues, while providing an analytics platform that aggregates network monitoring data with higher-level workload and data transfer services. This has been facilitated by the global network of perfSONAR instances that have been commissioned and are operated in collaboration with the WLCG Network Throughput Working Group. An additional important update is the inclusion of the newly funded NSF project SAND (Service Analytics and Network Diagnosis), which focuses on network analytics. This paper describes the current state of the network measurement and analytics platform and summarises the activities undertaken by the working group and our collaborators, including the progress made in providing higher-level analytics, alerting, and alarming from the rich set of network metrics we are gathering.
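
    A toy example of the alerting and alarming layer mentioned above: flag a network path when its latest throughput measurement falls below half of its recent baseline. Both the sample data and the threshold are illustrative assumptions, not the platform's actual rules.

        from statistics import mean

        # Invented throughput samples (Mbps) per source/destination pair.
        paths = {
            ("CERN", "FNAL"): [870, 910, 880, 420],
            ("CERN", "RAL"): [450, 470, 460, 455],
        }

        for (src, dst), samples in paths.items():
            baseline = mean(samples[:-1])  # average of earlier measurements
            latest = samples[-1]
            if latest < 0.5 * baseline:    # assumed alarm threshold
                print(f"ALERT {src}->{dst}: {latest} Mbps "
                      f"vs baseline {baseline:.0f} Mbps")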